>>I propose a Smart Benchmark, which senses how long a routine is taking,
and
>>terminates after a set period, projecting the time for completion of the
>>'full' program-- whatever the machine. All this code is (or SHOULD be)
>>deterministic, nonrandom, so extrapolation should be straightforward.
>>Maybe an animation, with option to deduct for disk access times.
>
>
>How about an image where the render time scales linearly with image area?
>This way, you simply render the image at what you consider an appropriate
>resolution for your machine (320x240 for a 486/33, 320000x240000 for a
>supercomputer, etc) and then apply a correction factor for the area (the
>486/33 in this example would have a correction factor of x1000000)?
>
>Mark
Interesting notion -- certainly simpler than a tricky program. How might this
work?
We have two machines, say, Trusty Rusty (a 486) and the Swarm Machine I
(twenty 500 MHz Celerons).
Q: The clock speeds on Swarm add up to about 300 times Rusty's. Which image
size would we select, then, for the Swarm Machine?
Q: What correction factor would we then apply?
Q: While it's true that an image 1000X by 1000Y is 1000000 times greater in
area than an image X by Y, is it also true that a THEORETICAL idealized
machine rendering in POV will take 1000000 times longer? Area increases by
the square -- what worries me here is that (so far as I know) we are doing
some computations in THREE dimensions. Might not some portions of the image
scale up by the CUBE of the increase instead of merely the square? Or worse?
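To make the arithmetic concrete, here is a minimal sketch of Mark's
area-based extrapolation, assuming render time really does scale linearly
with pixel count (the very assumption the question above puts in doubt).
The 600-second timing for Rusty is an invented, illustrative number:

```python
# Sketch of area-based benchmark extrapolation, ASSUMING render time
# scales linearly with pixel count. All timings here are illustrative.

def projected_time(measured_seconds, rendered_px, reference_px):
    """Scale a measured render time by the pixel-count ratio
    (Mark's 'correction factor')."""
    correction = reference_px / rendered_px
    return measured_seconds * correction

# Suppose Trusty Rusty renders 320x240 in 600 s. Project the time for
# the 'full' 320000x240000 image: the correction factor is 1000000.
full_time = projected_time(600, 320 * 240, 320_000 * 240_000)
print(full_time)  # 600 * 1000000 = 600000000 seconds
```

If render time instead grows faster than the area (the cube worry), this
linear projection would understate the full-image time, and two machines
benchmarked at different resolutions would not be directly comparable.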
Looking forward to your thoughts.
Matt